Automatic image segmentation is crucial for visual analysis. Autoencoder architectures deliver satisfactory performance on a variety of image segmentation tasks. However, autoencoders based on convolutional neural networks (CNNs) appear to have hit a bottleneck in improving the accuracy of semantic segmentation. Increasing the inter-class distance between foreground and background is an inherent characteristic of segmentation networks. Yet segmentation networks pay too much attention to the main visual differences between foreground and background while ignoring detailed edge information, which reduces the accuracy of edge segmentation. In this paper, we propose a lightweight end-to-end segmentation framework based on multi-task learning, termed the Edge Attention autoencoder Network (EAA-Net), to improve edge segmentation ability. Our approach not only uses a segmentation network to obtain inter-class features, but also employs a reconstruction network to extract intra-class features in the foreground. We further design an intra-class and inter-class feature fusion module, the I2 fusion module, which merges intra-class and inter-class features and uses a soft attention mechanism to remove invalid background information. Experimental results show that our method performs well on medical image segmentation tasks. EAA-Net is easy to implement and has a small computational cost.
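To make the fusion idea concrete, here is a minimal PyTorch sketch of how intra-class and inter-class feature maps could be merged through a soft attention gate, in the spirit of the I2 fusion module; the layer choices and names (`I2FusionSketch`, `gate`, `merge`) are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class I2FusionSketch(nn.Module):
    """Illustrative fusion of inter-class (segmentation) and intra-class
    (reconstruction) features via a soft attention gate. Layer sizes and
    names are assumptions, not EAA-Net's exact architecture."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv producing a per-pixel soft attention map in [0, 1]
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, inter_feat: torch.Tensor, intra_feat: torch.Tensor) -> torch.Tensor:
        stacked = torch.cat([inter_feat, intra_feat], dim=1)
        attn = self.gate(stacked)          # soft attention suppresses invalid background
        return attn * self.merge(stacked)  # background responses are attenuated

# usage: fuse = I2FusionSketch(64); out = fuse(seg_features, recon_features)
```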
While annotating massive amounts of data to satisfy sophisticated learning models can be cost-prohibitive for many real-world applications, active learning (AL) and semi-supervised learning (SSL) are two effective, but often isolated, approaches to alleviating the data-hungry problem. Some recent studies have explored the potential of combining AL and SSL to better probe the unlabeled data. However, almost all of these contemporary SSL-AL works adopt a simple combination strategy, ignoring the inherent relation between SSL and AL. Other methods, meanwhile, suffer from high computational costs when dealing with large-scale, high-dimensional datasets. Motivated by the industry practice of labeling data, we propose an innovative inconsistency-based virtual adversarial active learning (IDEAL) algorithm to further investigate the potential superiority of SSL-AL and achieve the mutual enhancement of AL and SSL: SSL propagates label information to unlabeled samples and furnishes smooth embeddings for AL, while AL excludes samples with inconsistent predictions and considerable uncertainty for SSL. We estimate the inconsistency of unlabeled samples with augmentation strategies of different granularities, including fine-grained continuous perturbation exploration and coarse-grained data transformations. Extensive experiments in both the text and image domains validate the effectiveness of the proposed algorithm in comparison with state-of-the-art baselines. Two real-world case studies visualize the practical industrial value of applying and deploying the proposed data sampling algorithm.
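The fine-grained continuous perturbation mentioned above resembles virtual adversarial training; the following sketch shows one plausible way to score a batch of unlabeled samples by their prediction inconsistency under such a perturbation. Function and parameter names (`inconsistency_score`, `xi`, `eps`) follow common VAT conventions and are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def inconsistency_score(model, x, xi=1e-6, eps=2.0):
    """Score unlabeled samples by prediction inconsistency under a
    virtual-adversarial (continuous) perturbation. A sketch, not IDEAL's
    exact recipe; model(x) is assumed to return class logits."""
    # Clean prediction, treated as the fixed target distribution.
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)

    # One power-iteration step towards the most sensitive direction.
    d = xi * F.normalize(torch.randn_like(x).flatten(1), dim=1).view_as(x)
    d.requires_grad_(True)
    adv_dist = F.kl_div(F.log_softmax(model(x + d), dim=1), p, reduction="batchmean")
    grad = torch.autograd.grad(adv_dist, d)[0]
    r_adv = eps * F.normalize(grad.flatten(1), dim=1).view_as(x)

    # Per-sample KL between clean and perturbed predictions; higher = more inconsistent.
    with torch.no_grad():
        q = F.log_softmax(model(x + r_adv), dim=1)
        return F.kl_div(q, p, reduction="none").sum(dim=1)
```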
Text-guided image editing models have shown remarkable results. However, two problems remain. First, they employ a fixed manipulation module for various editing requirements (e.g., color changing, texture changing, content adding and removing), which results in over-editing or insufficient editing. Second, they do not clearly distinguish text-required parts from text-irrelevant parts, which leads to inaccurate editing. To address these limitations, we propose: (i) a Dynamic Editing Block (DEBlock) that composes different editing modules dynamically for various editing requirements; (ii) a Composition Predictor (Comp-Pred) that predicts the composition weights for the DEBlock based on inference over the target text and the source image; and (iii) a Dynamic text-adaptive Convolution Block (DCBlock) that queries source image features to distinguish text-required parts from text-irrelevant parts. Extensive experiments demonstrate that our DE-Net achieves excellent performance and manipulates source images more correctly and accurately. Code is available at \url{https://github.com/tobran/de-net}.
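A rough sketch of the dynamic composition idea: a bank of editing modules whose outputs are blended with weights predicted from a text embedding, loosely mirroring the DEBlock and Comp-Pred interplay. The module bank and the weight predictor below are placeholders, not DE-Net's actual components.

```python
import torch
import torch.nn as nn

class DynamicEditSketch(nn.Module):
    """Illustrative dynamic composition of editing modules; the convs in
    the bank and the linear predictor are assumed stand-ins."""

    def __init__(self, channels: int, num_modules: int = 3, text_dim: int = 256):
        super().__init__()
        # a small bank of candidate editing operations
        self.modules_bank = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            for _ in range(num_modules)
        )
        # predicts one composition weight per module from the text embedding
        self.comp_pred = nn.Sequential(
            nn.Linear(text_dim, num_modules),
            nn.Softmax(dim=-1),
        )

    def forward(self, img_feat: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        w = self.comp_pred(text_emb)                         # (B, num_modules)
        edited = torch.stack([m(img_feat) for m in self.modules_bank], dim=1)
        return (w[:, :, None, None, None] * edited).sum(dim=1)
```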
Color fundus photography and Optical Coherence Tomography (OCT) are the two most cost-effective tools for glaucoma screening. Both modalities provide prominent biomarkers for indicating suspected glaucoma. Clinically, it is often recommended to take both screenings for a more accurate and reliable diagnosis. However, although numerous algorithms have been proposed for computer-aided diagnosis based on fundus images or OCT volumes, few methods leverage both modalities for glaucoma assessment. Inspired by the success of the Retinal Fundus Glaucoma Challenge (REFUGE) we held previously, we set up the Glaucoma grAding from Multi-Modality imAges (GAMMA) Challenge to encourage the development of fundus & OCT-based glaucoma grading. The primary task of the challenge is to grade glaucoma from both 2D fundus images and 3D OCT scanning volumes. As part of GAMMA, we have publicly released a glaucoma-annotated dataset with both 2D fundus color photography and 3D OCT volumes, which is the first multi-modality dataset for glaucoma grading. In addition, an evaluation framework has been established to assess the performance of the submitted methods. During the challenge, 1272 results were submitted, and the top 10 teams were selected for the final stage. We analyze their results and summarize their methods in this paper. Since all of these teams submitted their source code in the challenge, a detailed ablation study is also conducted to verify the effectiveness of the particular modules they proposed. We find that many of the proposed techniques are practical for the clinical diagnosis of glaucoma. As the first in-depth study of fundus & OCT multi-modality glaucoma grading, we believe the GAMMA Challenge will be an essential starting point for future research.
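For readers unfamiliar with multi-modality grading pipelines, the sketch below shows a generic late-fusion baseline combining a 2D fundus encoder with a 3D OCT encoder; the backbones, sizes, and three-grade head are assumed placeholders, not any GAMMA team's actual solution.

```python
import torch
import torch.nn as nn

class DualModalityGrader(nn.Module):
    """Minimal late-fusion sketch for glaucoma grading from a 2D fundus
    image and a 3D OCT volume. Encoders are toy stand-ins."""

    def __init__(self, num_grades: int = 3):
        super().__init__()
        self.fundus_branch = nn.Sequential(      # 2D CNN encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.oct_branch = nn.Sequential(         # 3D CNN encoder
            nn.Conv3d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_grades)    # fuse by concatenation

    def forward(self, fundus: torch.Tensor, oct_vol: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.fundus_branch(fundus), self.oct_branch(oct_vol)], dim=1)
        return self.head(feats)                  # logits over glaucoma grades
```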
Training deep models for RGB-D salient object detection (SOD) often requires a large number of labeled RGB-D images. However, RGB-D data is not easily acquired, which limits the development of RGB-D SOD techniques. To alleviate this issue, we present a Dual-Semi RGB-D Salient Object Detection Network (DS-Net) that leverages unlabeled RGB images to boost RGB-D saliency detection. We first devise a depth decoupling convolutional neural network (DDCNN), which contains a depth estimation branch and a saliency detection branch. The depth estimation branch is trained with RGB-D images and is then used to estimate pseudo depth maps for all unlabeled RGB images to form paired data. The saliency detection branch fuses RGB features and depth features to predict RGB-D saliency. The whole DDCNN is then used as the backbone in a teacher-student framework for semi-supervised learning. Moreover, we introduce a consistency loss on the intermediate attention and saliency maps for the unlabeled data, as well as supervised depth and saliency losses for the labeled data. Experimental results on seven widely used benchmark datasets demonstrate that our DDCNN outperforms state-of-the-art methods both quantitatively and qualitatively. We also demonstrate that our semi-supervised DS-Net can further improve performance, even when using RGB images with pseudo depth maps.
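Two of the ingredients above, the consistency loss on unlabeled data and the teacher-student coupling, can be sketched in a few lines; the MSE form and the EMA momentum value are assumptions in the spirit of standard mean-teacher training, not necessarily DS-Net's exact choices.

```python
import torch
import torch.nn.functional as F

def consistency_loss(student_maps, teacher_maps):
    """Mean-teacher-style consistency on predicted attention/saliency maps
    for unlabeled images. A sketch; DS-Net's exact loss may differ."""
    return sum(F.mse_loss(s, t.detach()) for s, t in zip(student_maps, teacher_maps))

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    """Exponential-moving-average teacher update, as is standard in
    teacher-student semi-supervised frameworks."""
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.mul_(momentum).add_(sp, alpha=1.0 - momentum)
```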
CLIP yielded impressive results on zero-shot transfer learning tasks and is considered a foundation model like BERT or GPT-3. CLIP vision models, which have rich representations, are pre-trained using the InfoNCE objective and natural language supervision before being fine-tuned on particular tasks. Although CLIP excels at zero-shot transfer learning, it suffers from an explaining-away problem, that is, it focuses on one or a few features while neglecting other relevant features. This problem is caused by insufficiently extracting the covariance structure of the original multi-modal data. We suggest using modern Hopfield networks to tackle the explaining-away problem. Their retrieved embeddings have an enriched covariance structure derived from the co-occurrence of features in the stored embeddings. However, modern Hopfield networks increase the saturation effect of the InfoNCE objective, which hampers learning. We propose using the InfoLOOB objective to mitigate this saturation effect. We introduce the novel "Contrastive Leave One Out Boost" (CLOOB), which uses modern Hopfield networks for covariance enrichment together with the InfoLOOB objective. In experiments, we compare CLOOB to CLIP after pre-training on the Conceptual Captions and YFCC datasets with respect to their zero-shot transfer learning performance on other datasets. CLOOB consistently outperforms CLIP at zero-shot transfer learning across all considered architectures and datasets.
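The key difference from InfoNCE is that InfoLOOB leaves the matching pair out of the denominator, which removes the term that drives saturation. A minimal one-sided sketch (omitting the Hopfield retrieval step, and assuming tau=0.07) could look like:

```python
import torch
import torch.nn.functional as F

def infoloob(img_emb, txt_emb, tau=0.07):
    """One-sided InfoLOOB contrastive loss: unlike InfoNCE, the positive
    pair is excluded from the denominator. tau=0.07 is an assumed value,
    and the Hopfield covariance-enrichment step of CLOOB is omitted."""
    img = F.normalize(img_emb, dim=1)
    txt = F.normalize(txt_emb, dim=1)
    sim = img @ txt.t() / tau                       # (N, N) similarity matrix
    pos = sim.diagonal()                            # matched image-text pairs
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(mask, float("-inf"))      # leave the positive out
    return (torch.logsumexp(neg, dim=1) - pos).mean()
```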
In this work, we propose the "Residual Attention Network", a convolutional neural network with an attention mechanism that can be incorporated into state-of-the-art feed-forward network architectures in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules that generate attention-aware features. The attention-aware features from different modules change adaptively as layers go deeper. Inside each Attention Module, a bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks that can easily scale up to hundreds of layers. Extensive analyses are conducted on the CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets: CIFAR-10 (3.90% error), CIFAR-100 (20.45% error), and ImageNet (4.8% single-model, single-crop top-5 error). Notably, our method achieves a 0.6% top-1 accuracy improvement with 46% of the trunk depth and 69% of the forward FLOPs compared to ResNet-200. Experiments also demonstrate that our network is robust against noisy labels.
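Attention residual learning can be summarized as H(x) = (1 + M(x)) * F(x), where F(x) is the trunk feature and M(x) is a soft mask in [0, 1] from the bottom-up top-down branch; the identity term preserves useful trunk features even where the mask is near zero, which is what lets very deep stacks train. The sub-networks in the sketch below are assumed stand-ins for the paper's trunk and mask branches.

```python
import torch
import torch.nn as nn

class AttentionResidualSketch(nn.Module):
    """Minimal sketch of attention residual learning,
    H(x) = (1 + M(x)) * F(x). The trunk and mask branches here are toy
    stand-ins; spatial dims are assumed even so pooling round-trips."""

    def __init__(self, channels: int):
        super().__init__()
        self.trunk = nn.Conv2d(channels, channels, 3, padding=1)   # F(x)
        self.mask = nn.Sequential(                                 # M(x)
            nn.MaxPool2d(2),                        # bottom-up: downsample
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Upsample(scale_factor=2),            # top-down: upsample back
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.trunk(x)
        m = self.mask(x)
        # identity term keeps good trunk features even where the mask is ~0
        return (1 + m) * f
```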
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet ignore the co-occurrence confounder of object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also raises a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object features, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and addressing the dilemma between classification and localization performance.
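As background for how the activation maps that KD-CI-CAM de-biases arise, the following sketch computes a plain CAM, the standard starting point; the max-normalization step is a common convention, not necessarily the paper's.

```python
import torch

def class_activation_map(feature_map, fc_weight, class_idx):
    """Plain CAM: weight the last conv feature map by the classifier
    weights of one class. feature_map: (C, H, W); fc_weight: (num_classes, C).
    A generic recipe, not KD-CI-CAM's causally intervened variant."""
    w = fc_weight[class_idx]                       # (C,)
    cam = torch.einsum("c,chw->hw", w, feature_map)
    cam = torch.relu(cam)                          # keep class-positive evidence
    return cam / (cam.max() + 1e-8)                # normalize to [0, 1]
```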
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, providing essential technical support for lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite the recent efforts of several spontaneous ME datasets to alleviate this problem, the amount of available data is still tiny. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. We then adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provide a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
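One common remedy for the class imbalance studied above is inverse-frequency weighted sampling; the sketch below is a generic recipe, not the specific solution evaluated on DFME, and `dataset.labels` in the usage note is a hypothetical attribute.

```python
from collections import Counter
from torch.utils.data import WeightedRandomSampler

def balanced_sampler(labels):
    """Inverse-frequency sample weights so minority ME classes are drawn
    as often as majority ones. A generic recipe, not DFME-specific."""
    counts = Counter(labels)
    weights = [1.0 / counts[y] for y in labels]
    return WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)

# usage: loader = DataLoader(dataset, batch_size=32, sampler=balanced_sampler(dataset.labels))
```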
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.